Approximate entropy
In statistics, approximate entropy (ApEn) is a technique used to quantify the amount of regularity and the unpredictability of fluctuations in time-series data.

For example, consider two series of data:
: series 1: (10, 20, 10, 20, 10, 20, 10, 20, 10, 20, 10, 20, ...), which alternates 10 and 20.
: series 2: (10, 10, 20, 10, 20, 20, 20, 10, 10, 20, 10, 20, 20, ...), which has either a value of 10 or 20, chosen randomly, each with probability 1/2.
Moment statistics, such as mean and variance, will not distinguish between these two series. Nor will rank order statistics distinguish between these series. Yet series 1 is "perfectly regular"; knowing one term has the value of 20 enables one to predict with certainty that the next term will have the value of 10. Series 2 is randomly valued; knowing one term has the value of 20 gives no insight into what value the next term will have.
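The claim about moment statistics can be checked directly. A minimal sketch (the series lengths and random seed are illustrative assumptions, not part of the original example):

```python
import random
import statistics

# Series 1: perfectly regular alternation of 10 and 20.
series1 = [10, 20] * 50

# Series 2: 10 or 20 chosen uniformly at random, probability 1/2 each.
random.seed(0)  # fixed seed, for reproducibility only
series2 = [random.choice([10, 20]) for _ in range(100)]

# Mean and variance are (approximately) the same for both series...
print(statistics.mean(series1), statistics.pvariance(series1))
print(statistics.mean(series2), statistics.pvariance(series2))

# ...yet series 1 is perfectly predictable: 20 is always followed by 10.
print(all(b == 10 for a, b in zip(series1, series1[1:]) if a == 20))
```

Any statistic computed from the value distribution alone is blind to the ordering that distinguishes the two series; ApEn is designed to capture exactly that ordering information.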
Regularity was originally measured by exact regularity statistics, which have mainly centered on various entropy measures. However, accurate entropy calculation requires vast amounts of data, and the results are greatly influenced by system noise, so it is not practical to apply these methods to experimental data. ApEn was developed by Steve M. Pincus to handle these limitations by modifying an exact regularity statistic, Kolmogorov–Sinai entropy. ApEn was initially developed to analyze medical data, such as heart rate, and its applications later spread to finance, psychology, and human factors engineering.

==The algorithm==
Step 1: Form a time series of data u(1), u(2), \ldots, u(N). These are N raw data values from measurements equally spaced in time.
Step 2: Fix m, an integer, and r, a positive real number. The value of m represents the length of compared runs of data, and r specifies a filtering level.
Step 3: Form a sequence of vectors \mathbf{x}(1), \mathbf{x}(2), \ldots, \mathbf{x}(N-m+1) in \mathbf{R}^m, real m-dimensional space, defined by \mathbf{x}(i) = [u(i), u(i+1), \ldots, u(i+m-1)].
Step 4: Use the sequence \mathbf{x}(1), \mathbf{x}(2), \ldots, \mathbf{x}(N-m+1) to construct, for each i, 1 \le i \le N-m+1,
: C_i^m(r) = (\text{number of } \mathbf{x}(j) \text{ such that } d[\mathbf{x}(i), \mathbf{x}(j)] < r) / (N-m+1),
in which d[\mathbf{x}, \mathbf{x}^*] is defined as
: d[\mathbf{x}, \mathbf{x}^*] = \max_a |u(a) - u^*(a)|.
The u(a) are the m scalar components of \mathbf{x}. Here d represents the distance between the vectors \mathbf{x}(i) and \mathbf{x}(j), given by the maximum difference in their respective scalar components. Note that j takes on all values, so the match provided when i = j will be counted (the subsequence is matched against itself).
Step 5: Define
: \Phi^m(r) = (N-m+1)^{-1} \sum_{i=1}^{N-m+1} \log(C_i^m(r)).
Step 6: Define approximate entropy (\mathrm{ApEn}) as
: \mathrm{ApEn} = \Phi^m(r) - \Phi^{m+1}(r),
where \log is the natural logarithm, for m and r fixed as in Step 2.
Parameter selection: typically choose m = 2 or m = 3, while r depends greatly on the application.
An implementation on PhysioNet,〔()〕 which is based on Pincus,〔 uses d[\mathbf{x}(i), \mathbf{x}(j)] < r, whereas the original article uses d[\mathbf{x}(i), \mathbf{x}(j)] \le r in Step 4. While this is a concern for artificially constructed examples, it is usually not a concern in practice.
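The six steps above can be sketched in NumPy. This sketch follows the strict-inequality (d < r) convention of the PhysioNet implementation mentioned above; the function name and default parameters are illustrative assumptions:

```python
import numpy as np

def approximate_entropy(U, m=2, r=3.0):
    """Compute ApEn(m, r) of a 1-D sequence U, following Steps 1-6.

    Uses d[x(i), x(j)] < r (the PhysioNet convention); self-matches
    (i == j) are counted, as noted in Step 4.
    """
    U = np.asarray(U, dtype=float)
    N = len(U)

    def phi(m):
        # Step 3: vectors x(i) = [u(i), u(i+1), ..., u(i+m-1)]
        x = np.array([U[i:i + m] for i in range(N - m + 1)])
        # Step 4: d is the maximum componentwise (Chebyshev) distance
        dist = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)
        C = np.sum(dist < r, axis=1) / (N - m + 1)
        # Step 5: average of log(C_i^m(r)); self-matches keep C_i > 0
        return np.mean(np.log(C))

    # Step 6: ApEn = Phi^m(r) - Phi^{m+1}(r)
    return phi(m) - phi(m + 1)

# The two example series: the regular one scores near 0,
# the random one scores substantially higher.
regular = [10, 20] * 50
rng = np.random.default_rng(0)
random_series = rng.choice([10, 20], size=100)
print(approximate_entropy(regular))        # close to 0
print(approximate_entropy(random_series))  # much larger
```

Because all pairwise distances in these example series are either 0 or 10, any 0 < r < 10 gives the same result, and the < versus ≤ distinction does not matter here; the O(N²) pairwise-distance computation is fine for short series but a production implementation would use a more memory-efficient counting loop.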

Excerpt source: Wikipedia, the free encyclopedia.
Read the full "approximate entropy" article on Wikipedia.